Learning Belief Networks in the Presence of Missing Values and Hidden Variables
Author
Abstract
In recent years there has been a flurry of work on learning probabilistic belief networks. Current state-of-the-art methods have been shown to be successful in two learning scenarios: learning both network structure and parameters from complete data, and learning parameters for a fixed network from incomplete data, that is, in the presence of missing values or hidden variables. However, no method has yet been demonstrated to effectively learn network structure from incomplete data. In this paper, we propose a new method for learning network structure from incomplete data. This method is based on an extension of the Expectation-Maximization (EM) algorithm for model selection problems, which searches for the best structure inside the EM procedure. We prove the convergence of this algorithm and adapt it for learning belief networks. We then describe how to learn networks in two scenarios: when the data contains missing values, and in the presence of hidden variables. We provide experimental results that show the effectiveness of our procedure in both scenarios.
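The sketch below is meant only to make the structural-EM idea above concrete on a deliberately tiny problem: the E-step computes expected sufficient statistics from the incomplete data under the current model, and the structure search is then carried out on those expected counts rather than on the (unavailable) complete data. The toy data, the two candidate structures, the BIC-style score, and all function and variable names are assumptions made for illustration, not the paper's implementation.

```python
# Minimal structural-EM sketch (illustrative only): two binary variables X and Y,
# some Y values missing, and two candidate structures:
# "indep" (X and Y independent) vs "x->y" (Y depends on X).

import numpy as np

rng = np.random.default_rng(0)

# Generate toy data from the "x->y" structure, then hide 30% of the Y values.
n = 500
x = rng.binomial(1, 0.6, size=n)
y = rng.binomial(1, np.where(x == 1, 0.8, 0.2)).astype(float)
y[rng.random(n) < 0.3] = np.nan

def expected_counts(x, y, params):
    """E-step: expected counts N[x_val, y_val] under the current parameters."""
    counts = np.zeros((2, 2))
    for xi, yi in zip(x, y):
        if np.isnan(yi):
            # Distribute the case over Y = 0/1 by its posterior given X.
            p1 = params["p_y_given_x"][int(xi)]
            counts[int(xi), 1] += p1
            counts[int(xi), 0] += 1 - p1
        else:
            counts[int(xi), int(yi)] += 1
    return counts

def score_structure(structure, counts, n):
    """Expected complete-data log-likelihood minus a BIC penalty."""
    eps = 1e-12
    n_x1 = counts[1].sum()
    p_x = (n_x1 + eps) / n
    ll = n_x1 * np.log(p_x) + (n - n_x1) * np.log(1 - p_x)
    if structure == "indep":
        n_y1 = counts[:, 1].sum()
        p_y = (n_y1 + eps) / n
        ll += n_y1 * np.log(p_y) + (n - n_y1) * np.log(1 - p_y)
        k = 2                                    # free parameters: p_x, p_y
        params = {"p_x": p_x, "p_y_given_x": np.array([p_y, p_y])}
    else:                                        # "x->y"
        p_y_given_x = (counts[:, 1] + eps) / (counts.sum(axis=1) + 2 * eps)
        ll += (counts[:, 1] * np.log(p_y_given_x)
               + counts[:, 0] * np.log(1 - p_y_given_x)).sum()
        k = 3                                    # p_x, p(y|x=0), p(y|x=1)
        params = {"p_x": p_x, "p_y_given_x": p_y_given_x}
    return ll - 0.5 * k * np.log(n), params

# Structural EM loop: E-step with the current model, then structure search
# over the expected counts instead of the real (incomplete) data.
structure = "indep"
params = {"p_x": 0.5, "p_y_given_x": np.array([0.5, 0.5])}
for _ in range(20):
    counts = expected_counts(x, y, params)                    # E-step
    best = max(("indep", "x->y"),
               key=lambda s: score_structure(s, counts, n)[0])
    _, params = score_structure(best, counts, n)              # structural M-step
    structure = best

print("selected structure:", structure)
print("estimated P(Y=1 | X):", np.round(params["p_y_given_x"], 2))
```

On data generated from the dependent structure, the loop should settle on "x->y" within a few iterations and recover conditional probabilities close to the generating ones.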
Similar resources
Incremental Learning of Bayesian Networks with Hidden Variables
In this paper, an incremental method for learning Bayesian networks based on evolutionary computing, IEMA, is put forward. IEMA introduces an evolutionary algorithm and the EM algorithm into the incremental learning process; it not only avoids getting trapped in local maxima, but also incrementally learns Bayesian networks with high accuracy in the presence of missing values and hidden variables. In addit...
"Ideal Parent" Structure Learning for Continuous Variable Bayesian Networks
Bayesian networks in general, and continuous variable networks in particular, have become increasingly popular in recent years, largely due to advances in methods that facilitate automatic learning from data. Yet, despite these advances, the key task of learning the structure of such models remains a computationally intensive procedure, which limits most applications to parameter learning. This...
Belief Functions Based Parameter and Structure Learning of Bayesian Networks in the Presence of Missing Data
Existing methods of parameter and structure learning of Bayesian Networks (BNs) from a database assume that the database is complete. If there are missing values, they are assumed to be missing at random. This paper incorporates the concepts used in Dempster-Shafer theory of belief functions to learn both the parameters and structure of BNs. Instead of filling the missing values by their estima...
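As a point of reference for the belief-function machinery mentioned here, the sketch below implements Dempster's rule of combination over a tiny frame of discernment for a single missing binary value. It only illustrates how evidence can be assigned to sets of values (including the whole frame, i.e. complete ignorance) rather than to a single filled-in estimate; the mass values and the `combine` helper are illustrative assumptions, not the paper's learning procedure.

```python
# Dempster's rule of combination on a two-element frame (illustrative sketch).
from itertools import product

def combine(m1, m2):
    """Combine two mass functions whose focal elements are frozensets."""
    combined = {}
    conflict = 0.0
    for (a, wa), (b, wb) in product(m1.items(), m2.items()):
        inter = a & b
        if inter:
            combined[inter] = combined.get(inter, 0.0) + wa * wb
        else:
            conflict += wa * wb                 # mass assigned to disagreement
    if conflict >= 1.0:
        raise ValueError("totally conflicting evidence")
    return {s: w / (1.0 - conflict) for s, w in combined.items()}

# Frame of discernment for one missing binary value Y of a record.
frame = frozenset({0, 1})

# Hypothetical evidence sources (values chosen only for illustration):
# one mildly supports Y = 1, the other is largely ignorant.
m_a = {frozenset({1}): 0.6, frame: 0.4}
m_b = {frozenset({1}): 0.3, frozenset({0}): 0.1, frame: 0.6}

for focal, mass in sorted(combine(m_a, m_b).items(), key=lambda kv: -kv[1]):
    print(set(focal), round(mass, 3))
```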
The Bayesian Structural EM Algorithm
In recent years there has been a flurry of work on learning Bayesian networks from data. One of the hard problems in this area is how to effectively learn the structure of a belief network from incomplete data, that is, in the presence of missing values or hidden variables. In a recent paper, I introduced an algorithm called Structural EM that combines the standard Expectation Maximization (EM)...
Prediction of breeding values for the milk production trait in Iranian Holstein cows applying artificial neural networks
Artificial neural networks, learning algorithms and mathematical models that mimic the information-processing ability of the human brain, can be applied to non-linear and complex data. The aim of this study was to predict the breeding values for the milk production trait in Iranian Holstein cows using artificial neural networks. Data on 35167 Iranian Holstein cows recorded between 1998 and 2009 were ...